Mainstream smartphones, with their on-board sensors, increasing computational power, and access to remote human and automated web services, show immense promise for improving the independence and daily accessibility of blind people. Though specialized services and technologies have important uses, there is a trend away from specialized technology and toward mainstream devices, driven by sustainability, affordability, and breadth of access.
In this work, I investigate multimodal interaction models and software for mainstream accessibility for blind and low-vision people, serving everyday convenience and practicality, aesthetics and creativity, social inclusion, and education. The study of such mainstream multimodal interactions raises broader research questions whose answers can benefit not only blind people but the larger community. How do we design and use mainstream technology as optimally and effectively as possible to include a diverse user population? What can we learn from the ways people already use and appropriate mainstream technology for specialized purposes? I present work on accessibility via mainstream technology using audio, haptics, and camera focalization (i.e., focusing on and localizing objects in the user's environment).
I show how non-visual interactions via mainstream technology can improve the independence and lives of blind and low-vision people by: